
Automatic sleep stage classification for children with sleep-disordered breathing using a modular network

Authors: Wang H, Lin G, Li Y, Zhang X, Xu W, Wang X, Han D

Published 30 November 2021, Volume 2021:13, Pages 2101–2112

DOI https://doi.org/10.2147/NSS.S336344

Single anonymous peer review

Editor who approved publication: Professor Ahmed BaHammam

Wang Huijun,1–3,* Lin Guodong,4,* Li Yanru,1–3 Zhang Xiaoqing,1–3 Xu Wen,1–3 Wang Xingjun,4 Han Demin1–3

1Department of Otorhinolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China; 2Clinical Diagnosis and Research Centre for Obstructive Sleep Apnea-Hypopnea Syndrome, Capital Medical University, Beijing, People's Republic of China; 3Key Laboratory of Otorhinolaryngology Head and Neck Surgery, Ministry of Education, Capital Medical University, Beijing, People's Republic of China; 4Department of Electronic Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, People's Republic of China

*These authors contributed equally to this work

Correspondence: Han Demin, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, People's Republic of China, Tel +86-010-58269335, Fax +86-010-58269331, Email [email protected]; Wang Xingjun, Tsinghua Shenzhen International Graduate School, Nanshan District, Shenzhen, 518055, People's Republic of China, Tel +86-18038153071, Email [email protected]

Purpose: To develop an automatic sleep stage analysis model for children and to evaluate the effect of this model on the diagnosis of sleep-disordered breathing (SDB).

Patients and Methods: This study recruited 344 SDB patients aged 2 to 18 years who completed polysomnography (PSG) to assess disease severity. We developed a deep neural network that classifies sleep stages from the electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG). Model performance was evaluated by accuracy, precision, recall, F1 score and Cohen's kappa coefficient (κ). We also compared sleep-parameter calculations among the technicians, the model ensemble, and the single-channel EEG model.

Results: The numbers of raw recordings in the training, validation and test sets were 240, 36, and 68, respectively. The best performance was achieved by the model ensemble, with a 5-stage accuracy of 83.36% (κ=0.7817) and a 2-stage accuracy of 96.76% (κ=0.8236). The single-channel EEG model also showed satisfactory classification. TST, SE, SOL, W time, N1+N2 time, N3 time, and OAHI did not differ significantly between the technicians and the model (P>0.05). On the sleep-EDF-13 and sleep-EDF-18 data sets, the proposed method reached average 5-stage classification accuracies of 92.76% and 91.94%, respectively.

Conclusion: The pediatric automatic sleep staging model established in this study has good reliability and generalizability. In addition, it can be used to calculate quantitative sleep parameters and to assess the severity of SDB.

Keywords: sleep disordered breathing, SDB, deep learning, sleep stage, children

Sleep-disordered breathing (SDB) represents a spectrum of breathing disorders, ranging from primary snoring (PS) to obstructive sleep apnea (OSA), that disturb nocturnal breathing and sleep architecture and are very common in children at critical stages of growth.1 Early diagnosis and intervention are of great significance, because SDB has been shown to affect the function of multiple organ systems, including immune response, cardiovascular function, and neurocognitive function.2

According to Pediatric Sleep Questionnaire (PSQ) measurements, the prevalence of SDB among children with snoring, mouth breathing, or apnea ranges from 7.9% to 13.4%.3,4 According to guidelines issued by the American Academy of Pediatrics in 2012, the incidence of OSA in children is 1% to 5%.5 Since overnight polysomnography (PSG) remains the gold standard for diagnosing the severity of SDB, diagnostic efficiency largely depends on the availability and accessibility of the technique.

Sleep stage classification is the first step in PSG data analysis and follows strict standards set by the American Academy of Sleep Medicine (AASM).6 It takes a technician approximately 1 to 2 hours to identify the sleep stages manually. Moreover, the inter-rater reliability (IRR) of the classification shows considerable variability: the SIESTA database, from an EU-funded project, found an overall epoch-by-epoch agreement under the AASM standard of 82.0% (κ=0.76).7 An accurate and user-friendly sleep staging system would therefore assist sleep experts and provide important clinical utility.

Because manual scoring is time-consuming and labor-intensive, several methods for automatically scoring overnight sleep data have been studied over the past few decades, with model accuracies mostly between 78% and 90%.8–12 The development and evaluation of software dedicated to automatic sleep staging (AS) faces several problems: (1) the signal-to-noise ratio (SNR) of EEG is low, because the measured brain activity is obscured by various environmental, physiological and activity-specific noise sources, commonly referred to as "artifacts"; and (2) the generalization ability of the models needs further verification in real-world patients of different ages, pathophysiologies, and treatments.13,14

The behavioral and physiological characteristics of normal children's sleep differ substantially from those of adults.1 The frequency and amplitude of the characteristic waves change dynamically across age groups, with considerable variation both within and between individual children. Correctly classifying the sleep stages of children of different ages therefore matters for assessing sleep efficiency and managing sleep-related disorders in children.

Our laboratory previously adopted a multi-signal, multi-model ensemble strategy and proposed an automated deep neural network with an average accuracy of 81.81%.8 Building on a large sample of unfiltered PSG recordings from children with sleep-disordered breathing, the present work aims to develop an automatic sleep stage analysis model with good generalizability in the clinical environment.

The study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the Institutional Review Board (IRB) of Beijing Tongren Hospital (TRECKY2017-032). Written informed consent was obtained from each child's parents for inclusion in the study and for use of their medical records. In accordance with the IRB's decision, the sleep-EDF public data sets were used for model verification without additional patient informed consent.

Children were recruited at Beijing Tongren Hospital from January 2017 to June 2021. The inclusion criteria were: (1) age 2 to 18 years; (2) snoring on more than 3 days a week; (3) total sleep time longer than 6 hours; and (4) the children's parents voluntarily participated in the study and signed an informed consent form. The exclusion criteria were: (1) inability to complete overnight polysomnography; (2) failure of PSG recording integration; and (3) PSG recordings that technicians could not analyze because of extensive artifacts.

To verify the generalizability of the model, we used the expanded sleep-EDF database, which contains approximately 20-hour or 9-hour PSG recordings from healthy subjects and from subjects who had mild difficulty falling asleep after taking temazepam but were otherwise healthy, comprising sleep-EDF-13 (61 PSG recordings) and sleep-EDF-18 (197 PSG recordings).15 Stages N3 and N4, scored according to the R&K rules, were merged into N3 in accordance with the AASM guidelines. For comparison with other articles, the long wake periods in the scored PSG data were trimmed so that only the 30 minutes of wake immediately before and after sleep were included.

PSG was recorded with two computerized data collection systems, the Compumedics S series (Compumedics Inc., Australia) and Alice 6 (Philips Inc., USA), including EEG (C3/A2, C4/A1), EOG (ROC, LOC), EMG, electrocardiogram (ECG), nasal and oral cannula pressure, respiratory (chest and abdominal) movement recordings, and pulse-oximetry oxygen saturation (SpO2).

Well-trained PSG technicians with more than 10 years of experience scored the sleep stages and respiratory events in 30-s epochs in accordance with the AASM guidelines (2012).6 Each epoch was classified into one of five categories: (1) wake (W), (2) non-rapid eye movement stage 1 (N1), (3) non-rapid eye movement stage 2 (N2), (4) non-rapid eye movement stage 3 (N3), or (5) rapid eye movement (REM). Stages N1 and N2 are also called light sleep, and N3 is called deep sleep. In our study, we additionally grouped the sleep stages into 4 stages (W vs light sleep vs deep sleep vs REM), 3 stages (W vs non-REM vs REM) and 2 stages (W vs sleep).
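The coarser groupings are simple many-to-one mappings of the five AASM stages. As a minimal illustration (the label encodings below are our own, not the authors' scheme), the regrouping can be expressed as:

```python
# Hypothetical mapping from the 5 AASM stages to the 4-, 3- and 2-stage groupings
# described above (W / light / deep / REM, W / NREM / REM, W / sleep).
FIVE_TO_FOUR = {"W": "W", "N1": "Light", "N2": "Light", "N3": "Deep", "REM": "REM"}
FIVE_TO_THREE = {"W": "W", "N1": "NREM", "N2": "NREM", "N3": "NREM", "REM": "REM"}
FIVE_TO_TWO = {"W": "W", "N1": "Sleep", "N2": "Sleep", "N3": "Sleep", "REM": "Sleep"}

def regroup(stages, mapping):
    """Collapse a sequence of 5-stage labels into a coarser grouping."""
    return [mapping[s] for s in stages]

# Example: one hypnogram fragment scored per 30-s epoch.
print(regroup(["W", "N1", "N2", "N3", "REM"], FIVE_TO_TWO))
# ['W', 'Sleep', 'Sleep', 'Sleep', 'Sleep']
```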

The obstructive apnea-hypopnea index (OAHI) is defined as the number of obstructive apnea and hypopnea events per hour of sleep and is used to indicate the severity of SDB: PS (OAHI < 1); mild OSA (1 ≤ OAHI < 5); moderate OSA (5 ≤ OAHI < 10); severe OSA (OAHI ≥ 10).16,17
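A small helper makes the thresholds explicit; this sketch simply restates the severity cut-offs listed above (the function name is ours):

```python
def sdb_severity(oahi: float) -> str:
    """Map an obstructive apnea-hypopnea index (events/hour) to the
    severity categories used in the paper."""
    if oahi < 1:
        return "primary snoring"
    if oahi < 5:
        return "mild OSA"
    if oahi < 10:
        return "moderate OSA"
    return "severe OSA"

print(sdb_severity(3.2))  # 'mild OSA'
```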

The sampling frequency of the PSG data is 200 Hz in the Alice 6 data set and 256 Hz in the Compumedics data set. To facilitate subsequent processing while retaining the necessary signal information, we filter the signal at 50 Hz and then downsample it to 50 Hz; this removes high-frequency noise without spectral aliasing. In addition, the signal amplitude is scaled to 100.
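A minimal preprocessing sketch is shown below, assuming SciPy; the low-pass cutoff, the clipping interpretation of "scaled to 100", and the helper name are our assumptions, not the authors' exact pipeline:

```python
# Low-pass filter to suppress high-frequency noise, resample each channel to
# 50 Hz, then bound the amplitude. Illustrative only.
import numpy as np
from math import gcd
from scipy.signal import butter, filtfilt, resample_poly

def preprocess(signal: np.ndarray, fs_in: int, fs_out: int = 50,
               cutoff_hz: float = 20.0, amp_limit: float = 100.0) -> np.ndarray:
    """signal: 1-D raw EEG/EOG/EMG trace sampled at fs_in Hz (200 or 256 here)."""
    # Anti-aliasing low-pass; 20 Hz is an illustrative cutoff below the
    # 25 Hz Nyquist limit of the 50 Hz target rate.
    b, a = butter(4, cutoff_hz / (fs_in / 2), btype="low")
    filtered = filtfilt(b, a, signal)
    # Rational resampling, e.g. 200 -> 50 Hz (up=1, down=4) or 256 -> 50 Hz (25/128).
    g = gcd(fs_out, fs_in)
    resampled = resample_poly(filtered, fs_out // g, fs_in // g)
    # Keep the amplitude within +/- amp_limit (interpretation of "scaled to 100").
    return np.clip(resampled, -amp_limit, amp_limit)
```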

In a multi-class task, the neural network outputs a confidence score for each category; after softmax processing, these become a probability vector. The cross-entropy loss of the network is calculated as:

$$L = -\sum_{i=1}^{K} p_i \log q_i \qquad (1)$$

where p is the ground-truth probability vector, q is the probability vector predicted by the network, and K is the total number of classes. When the ground-truth probability vector is in one-hot (hard label) form, p_i = 1 if i equals the ground-truth class c and p_i = 0 otherwise, which may lead to overfitting. To address this problem, label smoothing converts hard labels into soft labels:

$$p_i = \begin{cases} 1-\varepsilon, & i = c \\ \varepsilon/(K-1), & i \neq c \end{cases} \qquad (2)$$

where ε is a constant with 0 < ε < 1. Label smoothing improves the generalization ability of the neural network and prevents it from becoming over-confident and overfitting.
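A hedged PyTorch sketch of the label-smoothed cross-entropy of Eqs. (1)-(2) follows; ε and the number of classes K come from the text, while the function name and implementation details are illustrative choices:

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits: torch.Tensor, target: torch.Tensor,
                           epsilon: float = 0.1) -> torch.Tensor:
    """logits: (batch, K) raw network outputs; target: (batch,) integer classes."""
    k = logits.size(1)
    log_q = F.log_softmax(logits, dim=1)                  # log of predicted probs q
    # Soft labels: 1 - epsilon on the true class, epsilon/(K-1) on the others.
    p = torch.full_like(log_q, epsilon / (k - 1))
    p.scatter_(1, target.unsqueeze(1), 1.0 - epsilon)
    return -(p * log_q).sum(dim=1).mean()                 # L = -sum_i p_i log q_i

# Example with K = 5 sleep stages:
loss = smoothed_cross_entropy(torch.randn(4, 5), torch.tensor([0, 2, 4, 1]))
```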

When physicians classify sleep stages, they consider not only the current epoch but also the adjacent epochs. To ensure that the neural network can learn the characteristics of adjacent epochs, we use one-to-many labels: the label of the current epoch corresponds to the signals of the current epoch and its adjacent epochs. In this work we consider the epoch immediately before and after, so each label corresponds to 90 seconds of signal.
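A minimal sketch of this windowing (our assumption of how it could be built, not the authors' code) concatenates the previous, current and next 30-s epochs and keeps the label of the centre epoch:

```python
import numpy as np

def make_context_windows(epochs: np.ndarray, labels: np.ndarray):
    """epochs: (n_epochs, samples_per_epoch) signal of one channel;
    labels: (n_epochs,) stage of each epoch. Returns 90-s inputs and labels."""
    xs, ys = [], []
    for i in range(1, len(epochs) - 1):               # first/last epoch lack a neighbour
        window = np.concatenate(epochs[i - 1:i + 2])  # 3 x 30 s = 90 s of signal
        xs.append(window)
        ys.append(labels[i])                          # label of the centre epoch
    return np.stack(xs), np.array(ys)

# With 50 Hz signals, each window has 3 * 30 * 50 = 4500 samples.
```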

In this work, we used a modular network architecture consisting of a convolution block and a multi-branch convolution block. Modular architectures are widely used in modern neural networks, such as VGG-nets18 and ResNets,19 and their effectiveness has been demonstrated on a variety of tasks.20–23 Stacking blocks with the same structure lets the network grow deeper without an ever-growing number of hyperparameters.

The structure of the convolution block is shown in Figure 1. The convolutional layers perform channel-number conversion and extract latent feature mappings. A shortcut connection joining the input and the output of the second ReLU allows the network to be made deeper without gradient vanishing or gradient explosion. The length of the features is halved by an average pooling layer. Batch normalization (BN) and ReLU are used before and after the convolutional layers, respectively. BN normalizes the distribution of the inputs so that the convolutional layers see inputs with as similar a distribution as possible, which alleviates gradient vanishing during training and accelerates the training speed of the model.

Figure 1 Overview of the convolution block.

Note: Batch normalization (BN) and ReLU are used before and after the convolutional layer.
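The block can be sketched roughly as below; this is a hedged reading of Figure 1, so the kernel sizes, layer ordering and class name are assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Rough sketch of the convolution block: channel conversion, BN/ReLU around
    the convolutions, a shortcut across the second convolution, and average
    pooling that halves the feature length."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Sequential(                 # channel-number conversion
            nn.BatchNorm1d(in_ch), nn.ReLU(),
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
        )
        self.conv2 = nn.Sequential(                 # feature extraction
            nn.BatchNorm1d(out_ch), nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        self.pool = nn.AvgPool1d(2)                 # halves the temporal length

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv1(x)
        x = x + self.conv2(x)                       # shortcut connection
        return self.pool(x)

# e.g. ConvBlock(1, 16)(torch.randn(8, 1, 4500)).shape -> torch.Size([8, 16, 2250])
```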

The structure of the multi-branch convolution block is shown in Figure 2. The multi-branch architecture is widely used in the Inception model family24–26 and ResNeXt.27 The first convolutional layer performs channel-number conversion. The feature channels are then divided into g groups, each used as the input of a convolutional layer with a 3×1 kernel, and the outputs are concatenated. A third convolutional layer with a 1×1 kernel ensures that the number of output channels matches the input. As in the convolution block, a shortcut connection and average pooling are also used. The multi-branch convolution block reduces the number of network parameters and helps prevent overfitting.

Figure 2 Overview of the multi-branch convolution block.

Note: The feature channels are divided into g groups, which are used as the inputs of the convolutional layers and then concatenated. The specific values of c and g are shown in Figure 3.
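Splitting the channels into g parallel 3×1 branches is equivalent to a grouped convolution, which is how the hedged sketch below models it; c and g follow Figure 3, and the values used here are examples only:

```python
import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    """Rough sketch of the multi-branch block: 1x1 channel conversion, g parallel
    3x1 branches (a grouped convolution), a 1x1 convolution restoring the channel
    count, a shortcut and average pooling."""
    def __init__(self, channels: int, groups: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=1),            # channel conversion
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1,
                      groups=groups),                                # g parallel branches
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=1),            # back to input width
        )
        self.pool = nn.AvgPool1d(2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(x + self.body(x))                           # shortcut, then pool

# e.g. MultiBranchBlock(channels=32, groups=4)(torch.randn(8, 32, 1024)).shape
# -> torch.Size([8, 32, 512])
```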

The proposed overall network architecture, composed mainly of convolution blocks and multi-branch convolution blocks, is shown in Figure 3. After the last multi-branch convolution block extracts features, a global average pooling layer is used to reduce the number of model parameters. Although our model has only about 160,000 parameters, fewer than most comparable models, it performs very well; in other words, the model is easier to train and trains faster. The optimizer used for training is AdaBound28 (learning rate = 0.0001, beta1 = 0.9, beta2 = 0.999, gamma = 0.001), and the batch size is 32. Early stopping was applied on the validation loss with a patience of 5 epochs. All experiments in this study were performed on an NVIDIA GeForce RTX 3090 GPU.

Figure 3 Overview of the overall network architecture based on convolution blocks and multi-branch convolution blocks.
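The training configuration described above (AdaBound optimizer, batch size 32, early stopping with a patience of 5 epochs on the validation loss) can be sketched roughly as follows; the third-party `adabound` package, the helper name and the loop structure are our assumptions, not the authors' code:

```python
import copy
import adabound  # third-party AdaBound implementation (assumed)
import torch

def train(model, train_loader, val_loader, loss_fn, max_epochs=100, patience=5):
    optimizer = adabound.AdaBound(model.parameters(), lr=1e-4,
                                  betas=(0.9, 0.999), gamma=1e-3)
    best_loss, best_state, bad_epochs = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:                      # batch size 32 in the paper
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val_loss < best_loss:                       # keep the best checkpoint
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                 # early stopping
                break
    model.load_state_dict(best_state)
    return model
```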

The performance of the neural network models is evaluated by the overall accuracy, precision, recall, weighted F1 score and Cohen's kappa coefficient, calculated as follows:

$$\text{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN} \qquad (3)$$

$$\text{Precision} = \frac{TP}{TP+FP} \qquad (4)$$

$$\text{Recall} = \frac{TP}{TP+FN} \qquad (5)$$

$$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (6)$$

where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives, respectively. Cohen's kappa is

$$\kappa = \frac{p_o - p_e}{1 - p_e} \qquad (7)$$

where p_o is the observed agreement between the model and the technician and p_e is the agreement expected by chance.
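A minimal sketch of how these metrics could be computed from technician labels and network predictions is given below, using scikit-learn; the toolkit choice and the weighted averaging for precision and recall are our assumptions:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)

def evaluate(y_true, y_pred):
    """y_true: technician stages; y_pred: network predictions (same encoding)."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="weighted", zero_division=0),
        "recall": recall_score(y_true, y_pred, average="weighted", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="weighted"),
        "kappa": cohen_kappa_score(y_true, y_pred),
    }

# Example with 5-stage labels 0..4 (W, N1, N2, N3, REM):
print(evaluate([0, 1, 2, 3, 4, 2], [0, 2, 2, 3, 4, 2]))
```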

Statistical analyses were performed with SPSS 22 software (SPSS Inc., Chicago, IL). The Shapiro-Wilk test was used to assess normality. Data are expressed as mean ± standard deviation or median (P25, P75). Normally distributed variables were analyzed with the t-test, and non-normally distributed variables with the Wilcoxon rank-sum test. Bland-Altman plots were used to analyze the limits of agreement between the technicians and the model.
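For reference, the quantities behind a Bland-Altman plot (mean bias and 95% limits of agreement) can be computed as in the hedged sketch below; this is illustrative only and not the SPSS procedure the authors used:

```python
import numpy as np

def bland_altman(technician: np.ndarray, model: np.ndarray):
    """Both arrays hold one value per subject for the same sleep parameter
    (e.g. TST in minutes). Returns the mean bias and 95% limits of agreement."""
    diff = model - technician
    bias = diff.mean()                                   # mean deviation of the model
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)    # limits of agreement

bias, (low, high) = bland_altman(np.array([412.0, 455.0, 390.0]),
                                 np.array([410.0, 458.0, 393.0]))
```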

The numbers of raw recordings collected with Alice 6 and the Compumedics S series were 201 and 143, respectively. All recordings were randomly divided into training, validation and test sets at a ratio of 7:1:2. The demographics and polysomnographic parameters of the different data sets did not differ significantly (Table 1). The distribution of W, N1, N2, N3, and REM was unbalanced, with a ratio of 1.74:1:7.33:3.94.

Table 1 Demographic and polysomnographic data of children in different data sets

Table 2 shows the accuracy and Cohen's kappa of models using different channel combinations. The best performance appeared in the model ensemble combining C3/A2, C4/A1, LOC, ROC, and EMG, with an average 5-stage accuracy of 83.36% (κ=0.7817). The model also successfully separated the wake and sleep stages, with an average 2-stage accuracy of 96.76% (κ=0.8236). Among the single-channel models, the EEG channel C3 performed better than LOC and EMG. The confusion matrices between the network predictions and the technicians' classification of the five sleep stages are shown in Figure 4. Except for the single-channel EMG model, the precision (45.24-53.48%) and sensitivity (14.93-29.82%) for N1 were low compared with the other stages, and most N1 epochs were classified as N2. We also compared sleep stage classification performance across children of different ages, different data collection systems, and OSA severities; the results are listed in the supplementary materials (Tables S1-S3).


Table 2 Comparison of test performance using different input channels

Figure 4 Confusion matrices of the 5-stage classification between network predictions and technician scoring (ground-truth sleep stages).

Abbreviations: Pre, precision; Sen, sensitivity; F1, F1-score.

Note: (A) Performance of the model ensemble; (B) performance of the model using EEG, EOG and EMG channels; (C) performance of the model using EEG and EOG channels; (D) performance of the model using the EEG (C3/A2) channel; (E) performance of the model using the EOG (LOC) channel; (F) performance of the model using the EMG channel.

There was no significant difference between the network predictions using the model ensemble and the technicians in total sleep time (TST), sleep efficiency (SE), sleep onset latency (SOL), wake time, light sleep time, or deep sleep time (P>0.05). The model overestimated REM sleep time by about 7.73 minutes (P<0.001) (Figure 5). Using the single-channel EEG model, we found that it underestimated SOL by approximately 1.63 minutes (Supplementary material, Figure S1).


Figure 5 Bland-Altman plots showing the agreement between the technicians and the model ensemble for TST, SOL, W time, N1+N2 (light sleep) time, N3 (deep sleep) time, and REM time.

Abbreviations: TST, total sleep time; SE, sleep efficiency; SOL, sleep onset latency; W, wake; N1+N2, light sleep; N3, deep sleep; REM, rapid eye movement; OAHI, obstructive apnea-hypopnea index.

Note: The solid horizontal lines represent the upper and lower limits of agreement, and the dotted line represents the mean deviation of the model.

Using the stage classification from the model ensemble, we re-analyzed the respiratory events in the test data set. As shown in Table 3, there was good agreement between the model and the technicians (P=0.303), and there were no errors in the diagnosis of OSA severity between the two sleep stage analyses.

Table 3 Comparison of sleep and respiratory parameters between network predictions and technician analysis

We performed 4-fold cross-validation to evaluate the performance of models using the Fpz-Cz, Pz-Oz, and EOG channels. For the 5-stage task, testing on the full recordings reached accuracies of 92.76% (κ=0.8778) on sleep-EDF-13 and 91.94% (κ=0.8521) on sleep-EDF-18. When the large number of wake epochs is removed, the performance of the model decreases, with accuracies of 85.75% (κ=0.8015) on sleep-EDF-13 and 84.58% (κ=0.7862) on sleep-EDF-18. See Table 4 for a comparison with similar studies.

Table 4 Performance comparison of various studies in the Sleep-EDF public data set

This study is the first to use unfiltered raw PSG data from a large clinical sample of children with SDB to train a sleep staging model, and it compares the quantitative sleep parameters derived from expert and model sleep staging, such as total sleep time, sleep efficiency, sleep onset latency, and the time spent in each sleep stage. Using a modular network with few parameters, we propose an automatic sleep stage analysis model for children. For the 5-stage task, the accuracy is 83.36% and Cohen's kappa coefficient is 0.7881. The model can accurately distinguish wake from sleep, and there was no significant difference in the diagnosis of SDB severity in children. Compared with similar studies, the model also performs well when verified on sleep-EDF.

Previous studies have trained models on public data sets (collected from healthy adults and patients with insomnia but no other diseases) and achieved good accuracy (78-92%). However, raw PSG data may suffer from problems such as electrode detachment, unstable baselines, and high impedance, and model accuracy is affected by factors such as sleep fragmentation and arousals.8 Other studies have been based on pediatric PSG data,10,11 but their sample sizes, use of more channels, and larger model sizes may limit their clinical application. Since the EEG patterns of children of different ages differ, we trained on PSG recordings from children aged 2-18 years, and the resulting model generalizes well: the sleep staging accuracies for children aged 2-6, 7-13 and 14-18 years were 82.68%, 83.86% and 84.71%, respectively. As in other studies, accuracy decreases as the severity of SDB increases.8,32 The test set of this study included 10 children with AHI ranging from 5.03 to 39.42; their 5-stage automatic classification accuracy was 82.74%, slightly lower than in children with primary snoring or mild OSA. Nevertheless, SDB severity had little impact on the classification results.

Considering the clinical application of the automatic sleep staging model, we analyzed the performance of different channel combinations and single-channel EEG. Except for the single-channel EMG model, the 2-stage accuracy of all models was above 95%, and the 5-stage performance was comparable to that of experienced sleep technicians. The lower accuracy of the single-channel EEG model may be related to the smaller differences between sleep stages in a single channel. The EMG signal contains many artifacts caused by sweat and electrode detachment, and it differs little among the N1, N2, and N3 stages, which explains the poor performance of the single-channel EMG model. As in other studies, the specificity and sensitivity for N1 were low because of the small proportion of N1 in this database. A large amount of N1 was classified as N2 or REM, which may be related to the lack of distinctive features of N1 as a transitional stage of sleep. Some studies have applied separate data augmentation to N1,35 which can improve its diagnostic efficiency; we will use this in future model optimization. Stage N3 accounts for a relatively high proportion of children's sleep, and slow waves have high amplitude; compared with adult EEG staging models,8,36,37 the accuracy for N3 in this study is greatly improved.

The ability to compare sleep stage classifications and to quantify sleep parameters illustrates the usefulness of the neural network models. The differences between the model ensemble and the technicians' analysis are small: the model underestimated total sleep time by 1.00 minute, light sleep (N1+N2) time by 5.21 minutes, and N3 time by 3.52 minutes, and overestimated sleep onset latency by 0.44 minutes, W time by 0.56 minutes, and REM time by 7.73 minutes. This inconsistency is tolerable. The single-channel EEG model achieved similar performance. Approaches based on non-contact radar technology, wearable devices, electrocardiography, and respiratory dynamics are less accurate than the single-channel EEG model in this study at distinguishing sleep stages.38-40 Using the automatic sleep staging, we re-analyzed the respiratory events, and the calculated OAHI did not differ from the manual analysis. In the future, adding a single-channel EEG module to wearable devices and performing automatic sleep stage classification could more effectively screen children with sleep-disordered breathing.

As shown in Table 4, various deep learning networks have achieved good performance.10,29-34 Combining EEG and EOG channels, our results are at a leading level. Recently, researchers have begun to consider how to make algorithms small, efficient and robust. Building on previous research, we improved the data preprocessing and the algorithm: (1) We filtered the signal at 50 Hz, so that the EEG features are preserved, high-frequency noise is removed, and the amount of computation is reduced. (2) Label smoothing uses soft labels: when calculating the loss function, the weight of the true class label is reduced, which helps suppress overfitting. (3) Compared with other classifiers,30,41-43 our model achieves a large performance improvement while the number of trainable parameters in a single model (about 150,000) is much smaller than in other similar studies (Table 5).

Table 5 The 5-stage overall accuracy of similar classifiers and their numbers of trainable parameters

Our study still has some limitations. Although the study population covers children aged 2-18 years, validation is still lacking for children under 2 years of age. The SDB patients were mainly children with primary snoring and mild OSA, and the proportion of N1 in the clinical pediatric sample is very small, which may affect performance on that stage.

In this study, we developed a modular-network automatic sleep stage analysis model with satisfactory reliability and generalizability, which can be used to calculate quantitative sleep parameters and to evaluate the severity of SDB.

This research could not have been completed without the help of the participants, technicians and physicians of the Department of Otolaryngology-Head and Neck Surgery of Beijing Tongren Hospital and the Department of Electronic Engineering of Tsinghua Shenzhen International Graduate School.

This work was supported by the Shenzhen Science and Technology Innovation Committee (WDZC20200818121348001, KCXFZ202002011010487), the National Natural Science Foundation of China (81970866, 81800894), the Chinese Academy of Engineering Consulting Research Project (2019-XZ-29), and the Shenzhen Natural Science Foundation.

The authors report no conflicts of interest in this work.

1. Sheldon SH, Ferber R, Kryger MH, Gozal D. Principles and Practice of Pediatric Sleep Medicine. Elsevier; 2014.

2. Zandieh SO, Cespedes A, Ciarleglio A, et al. Asthma and subjective sleep-disordered breathing in a large cohort of urban adolescents. J Asthma. 2017;54(1):62–68. doi:10.1080/02770903.2016.1188942

3. Abazi Y, Cenko F, Cardella M, et al. Sleep-disordered breathing: an epidemiological study among Albanian children and adolescents. Int J Environ Res Public Health. 2020;17(22):8586. doi:10.3390/ijerph17228586

4. Guo Y, Pan Z, Gao F, et al. Analysis of characteristics and risk factors of sleep-disordered breathing in children in Wuxi City. BMC Pediatr. 2020;20(1):310. doi:10.1186/s12887-020-02207-5

5. Marcus CL, Brooks LJ, Draper KA, et al. Diagnosis and management of childhood obstructive sleep apnea syndrome. Pediatrics. 2012;130(3):576–584. doi:10.1542/peds.2012-1671

6. Berry RB, Budhiraja R, Gottlieb DJ, et al. Rules for scoring respiratory events in sleep: update of the 2007 AASM manual for the scoring of sleep and associated events. Deliberations of the Sleep Apnea Definitions Task Force of the American Academy of Sleep Medicine. J Clin Sleep Med. 2012;8(5):597–619. doi:10.5664/jcsm.2172

7. Danker-Hopfe H, Anderer P, Zeitlhofer J, et al. Interrater reliability for sleep scoring according to the Rechtschaffen & Kales and the new AASM standard. J Sleep Res. 2009;18(1):74–84. doi:10.1111/j.1365-2869.2008.00700.x

8. Zhang X, Xu M, Li Y, et al. Automated multi-model deep neural network for sleep stage scoring with unfiltered clinical data. Sleep Breath. 2020;24(2):581–590. doi:10.1007/s11325-019-02008-w

9. Peter-Derex L, Berthomier C, Taillard J, et al. Automatic analysis of single-channel sleep EEG in a large spectrum of sleep disorders. J Clin Sleep Med. 2021;17(3):393–402. doi:10.5664/jcsm.8864

10. Huang X, Shirahama K, Li F, et al. Sleep stage classification for child patients using DeConvolutional Neural Network. Artif Intell Med. 2020;110:101981. doi:10.1016/j.artmed.2020.101981

11. Venkatesh K, Poonguzhali S, Mohanavelu K, et al. Sleep stage classification using neural network and single-channel EEG. IEEE Access. 2019;7:96495–96505. doi:10.1109/ACCESS.2019.2928129

12. Sharma M, Tiwari J, Acharya UR. Automatic sleep-stage scoring in healthy and sleep disorder patients using optimal wavelet filter bank technique with EEG signals. Int J Environ Res Public Health. 2021;18(6):3087. doi:10.3390/ijerph18063087

13. Younes M, Raneri J, Hanly P. Staging sleep in polysomnograms: analysis of inter-scorer variability. J Clin Sleep Med. 2016;12(6):885–894. doi:10.5664/jcsm.5894

14. Roy Y, Banville H, Albuquerque I, et al. Deep learning-based electroencephalography analysis: a systematic review. J Neural Eng. 2019;16(5):051001. doi:10.1088/1741-2552/ab260c

15. Kemp B, Zwinderman AH, Tuk B, et al. Analysis of a sleep-dependent neuronal feedback loop: the slow-wave microcontinuity of the EEG. IEEE Trans Biomed Eng. 2000;47(9):1185–1194. doi:10.1109/10.867928

16. Sateia MJ. International classification of sleep disorders-third edition: highlights and modifications. Chest. 2014;146(5):1387–1394. doi:10.1378/chest.14-0970

17. Subspecialty Group of Pediatrics, Chinese Medical Association. Guidelines for the diagnosis and treatment of obstructive sleep apnea in children in China (2020). Chinese Journal of Otorhinolaryngology Head and Neck Surgery. 2020;55(8):729–747.

18. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556; 2014.

19. He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2016.

20. Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell. 2017;39(4):640–651. doi:10.1109/TPAMI.2016.2572683

21. van den Oord A, Dieleman S, Zen H, et al. WaveNet: a generative model for raw audio. arXiv preprint arXiv:1609.03499; 2016.

22. Pinheiro P, Collobert R, Dollár P. Learning to segment object candidates. Adv Neural Inf Process Syst. 2015;2:1547.

23. Xiong W, Droppo J, Huang X, et al. The Microsoft 2016 conversational speech recognition system. IEEE; 2016.

24. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. IEEE Computer Society; 2014.

25. Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception architecture for computer vision. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2016:2818–2826.

26. Szegedy C, Ioffe S, Vanhoucke V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning. ICLR Workshop; 2016.

27. Xie S, Girshick R, Dollár P, et al. Aggregated residual transformations for deep neural networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2017.

28. Luo L, Xiong Y, Liu Y, et al. Adaptive gradient methods with dynamic bound of learning rate. In: International Conference on Learning Representations (ICLR); 2019.

29. Delimayanti MK, Purnama B, Nguyen NG, et al. Classification of brainwaves for sleep stages by high-dimensional FFT features from EEG signals. Appl Sci. 2020;10(5):1797. doi:10.3390/app10051797

30. Yildirim O, Baloglu UB, Acharya UR. A deep learning model for automated sleep stages classification using PSG signals. Int J Environ Res Public Health. 2019;16(4):599. doi:10.3390/ijerph16040599

31. da Silveira TLT, Kozakevicius AJ, Rodrigues CR. Single-channel EEG sleep stage classification based on a streamlined set of statistical features in wavelet domain. Med Biol Eng Comput. 2017;55(2):343–352. doi:10.1007/s11517-016-1519-4

32. Korkalainen H, Aakko J, Nikkonen S, et al. Accurate deep learning-based sleep staging in a clinical population with suspected obstructive sleep apnea. IEEE J Biomed Health Inform. 2020;24(7):2073–2081.

33. Mousavi S, Afghah F, Acharya UR. SleepEEGNet: automated sleep stage scoring with sequence to sequence deep learning approach. PLoS One. 2019;14(5):e0216456. doi:10.1371/journal.pone.0216456

34. Khalili E, Mohammadzadeh Asl B. Automatic sleep stage classification using temporal convolutional neural network and new data augmentation technique from raw single-channel EEG. Comput Methods Programs Biomed. 2021;204:106063. doi:10.1016/j.cmpb.2021.106063

35. Chriskos P, Frantzidis CA, Gkivogkli PT. Automatic sleep staging employing convolutional neural networks and cortical connectivity images. IEEE Trans Neural Netw Learn Syst. 2020;31(1):113–123. doi:10.1109/TNNLS.2019.2899781

36. Guillot A, Sauvet F, During EH, et al. Dreem Open Datasets: multi-scored sleep datasets to compare human and automated sleep staging. IEEE Trans Neural Syst Rehabil Eng. 2020;28(9):1955–1965. doi:10.1109/TNSRE.2020.3011181

37. Hassan AR, Bhuiyan MI. A decision support system for automatic sleep staging from EEG signals using tunable Q-factor wavelet transform and spectral features. J Neurosci Methods. 2016;271:107–118. doi:10.1016/j.jneumeth.2016.07.012

38. Scott H, Lovato N, Lack L. The development and accuracy of the THIM wearable device for estimating sleep and wakefulness. Nat Sci Sleep. 2021;13:39–53. doi:10.2147/NSS.S287048

39. Toften S, Pallesen S, Hrozanova M, et al. Validation of sleep stage classification using non-contact radar technology and machine learning (Somnofy®). Sleep Med. 2020;75:54–61. doi:10.1016/j.sleep.2020.02.022

40. Sun H, Ganglberger W, Panneerselvam E, et al. Sleep staging from electrocardiography and respiration with deep learning. Sleep. 2020;43(7):zsz306. doi:10.1093/sleep/zsz306

41. Supratak A, Dong H, Wu C, et al. DeepSleepNet: a model for automatic sleep stage scoring based on raw single-channel EEG. IEEE Trans Neural Syst Rehabil Eng. 2017;25(11):1998–2008. doi:10.1109/TNSRE.2017.2721116

42. Sors A, Bonnet S, Mirek S, et al. A convolutional neural network for sleep stage scoring from raw single-channel EEG. Biomed Signal Process Control. 2018;42:107–114. doi:10.1016/j.bspc.2017.12.001

43. Tsinalis O, Matthews PM, Guo Y. Automatic sleep stage scoring using time-frequency analysis and stacked sparse autoencoders. Ann Biomed Eng. 2016;44(5):1587–1597. doi:10.1007/s10439-015-1444-y

